"My manager used ChatGPT to write my performance review and it didn't even understand my role."
This employee had good reason to doubt that their manager had written the review themselves, and strong reason to suspect that AI had generated it instead. The glaring problem was that the review contained numerous contradictions and seemed not to understand the nature of their work, giving a slanted, unfair picture of how well they met their responsibilities.
As we move deeper into conversations about which tasks and responsibilities we can (and should) hand off to AI, it's becoming increasingly clear that capability doesn't always equal suitability.
It's a bit like the different ways we already communicate with one another. When we have something important to say, we generally prefer a formal, in-person conversation. You wouldn't want your doctor to deliver bad news on a Post-it note written in crayon. You wouldn't want your significant other to break up with you via a text message full of 2000s “textese” abbreviations and l33t speak.
There are, however, more informal situations where the Post-it note is perfectly appropriate, such as a little note tucked into a loved one's lunch wishing them a good day. (Textese and l33t speak, on the other hand, will always be obnoxious, just as they were at the time.)
The same is true for AI, or “large language models” (LLMs), a far more accurate name for what they actually are. There are situations where they handle mundane, repetitive tasks very well, and as long as someone can audit their frequent mistakes and hallucinations, there is very little social or human cost to letting them. One example that springs to mind for me is cutting out (masking) subjects in images when Photoshopping composites. Doing this by hand demanded time and steady focus, and it was quite painstaking. Now it can be done in moments at the press of a button.
Though I lament the obsolescence of a skill I was actually quite proud of, I'm more than glad to offload that task to automation.
Sure, at some point these learning algorithms may actually be able to think for themselves and pass a Turing test without the gimmicks and sleight of hand that belong more to a magician's toolkit than to genuine machine intelligence. At that point, these tools might even be useful for things that weren't already algorithmically automatable.
But even when that point is reached in their development—if it ever is—there will always be tasks that need a human touch: things we handle ourselves precisely because they are part of what makes us human and allow us to connect with other people. As with our sticky note, there will be times when anything AI outputs, even if it's “faster” and “good enough,” still won't and shouldn't be acceptable. These are matters of great personal importance, or ones otherwise fundamentally crucial to get right.
Would you want to drive across a bridge that had been structurally evaluated by the same program that occasionally decides that 1+1=3? Would you trust it to transfer a large sum of money for you? To write a parent's obituary?
Whether an AI should be involved at all in these situations, and to what degree, will be an ongoing debate. But there will certainly be times like these when it is essential that a human retains the final decision-making authority and evaluates the outcome of an automated task. “Human in the loop,” as it is called in automation and machine-learning circles, and I'm sure that in the near future this or some similar phrase will become commonplace in our vernacular.
In the workplace, 1-on-1s and performance evaluations are two such things, where a personal touch and real communication will always be important. What is the point of them if they aren't serving to build better connection and alignment between two people in an organization?
Jamming an AI into the middle of all that dehumanizes the interaction, and there are serious ethical problems in cases like this one, where the AI fails to get the facts straight and even struggles to identify the core competencies of the role it is evaluating.
But whether something is AI-generated is not the whole issue either. In instances like the one in this story, where you suspect but cannot prove that a document or written exchange was AI-generated, you will find yourself forever wondering whether what you have read or experienced is genuine.
AI has already instilled this doubt in us. It makes us suspicious of everything we come across, driving us to pick apart every little detail for signs of AI use so that we can triumphantly declare that we were not fooled.
I see this all the time in comments around the internet, as well as in response to my own content and writing: baseless accusations that something is “AI-generated” when it very much is not. I bet at least a handful of you were doing your best impression of the Leonardo DiCaprio pointing meme when I used an em dash (—) earlier, since it has supposedly become one of the hallmarks of AI-generated text. The reality is that LLMs were trained on articles like this one from across the internet. Although the em dash is foreign to most people, writers have used (and misused) it extensively for the last 15 years and know its keyboard shortcut by heart, so when LLMs were trained on that writing, the em dash became a fixed part of their style.
As many writers likely have, I have begrudgingly abandoned its use amid a storm of accusations—though I do still like to bring it out from time to time.